
    Ideophones in Japanese modulate the P2 and late positive complex responses

    Sound-symbolism, or the direct link between sound and meaning, is typologically and behaviorally attested across languages. However, neuroimaging research has mostly focused on artificial non-words or individual segments, which do not represent sound-symbolism in natural language. We used EEG to compare Japanese ideophones, which are phonologically distinctive sound-symbolic lexical words, with arbitrary adverbs during a sentence-reading task. Ideophones elicited a larger visual P2 response and a sustained late positive complex compared with arbitrary adverbs. Together with previous literature, these results suggest that the larger P2 may index the integration of sound and sensory information by association, in response to the distinctive phonology of ideophones, and that the late positive complex may reflect facilitated lexical retrieval of ideophones relative to arbitrary words. This account provides new evidence that ideophones exhibit cross-modal correspondences similar to those proposed for non-words and individual sounds, and that these effects are detectable in natural language.
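
    A minimal sketch of the kind of ERP measurement this abstract describes: the mean amplitude of the trial-averaged waveform inside a component window, compared across conditions. The array shapes, the 150-250 ms P2 window and the 500-800 ms LPC window are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def mean_window_amplitude(epochs, times, t_min, t_max):
    """Mean amplitude of the trial-averaged ERP in a time window.

    epochs: (n_trials, n_channels, n_samples) EEG epochs, in volts
    times:  (n_samples,) epoch time axis, in seconds
    """
    mask = (times >= t_min) & (times <= t_max)
    erp = epochs.mean(axis=0)          # average over trials -> ERP
    return erp[:, mask].mean(axis=1)   # per-channel mean in the window

# Hypothetical windows for the two components of interest:
# p2_ideo  = mean_window_amplitude(ideophone_epochs, times, 0.15, 0.25)
# p2_adv   = mean_window_amplitude(adverb_epochs,    times, 0.15, 0.25)
# lpc_ideo = mean_window_amplitude(ideophone_epochs, times, 0.50, 0.80)
```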

    The Effect of Visual Perceptual Load on Auditory Awareness of Social vs. Non-social Stimuli in Individuals with Autism

    This study examined the effect of increasing visual perceptual load on auditory awareness of social and non-social stimuli in adolescents with autism spectrum disorder (ASD, n = 63) and typically developing (TD, n = 62) adolescents. Using an inattentional deafness paradigm, a socially meaningful ('Hi') or a non-social (neutral tone) critical stimulus (CS) was unexpectedly presented under high and low load. For the social CS, both groups continued to show high awareness rates as load increased. Awareness rates for the non-social stimulus were reduced when load increased for the TD group, but not for the ASD group. The findings indicate enhanced processing capacity for non-social stimuli in ASD compared with TD, and a special attentional status for social stimuli in the TD group.
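
    One conventional way to test the pattern reported here is a logistic regression of trial-level awareness on load and group, where the interaction term carries the group-specific load cost. The data below are simulated under a toy assumption (detection drops under high load for TD only); variable names and effect sizes are illustrative, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # hypothetical trials

df = pd.DataFrame({
    "load":  rng.choice(["low", "high"], n),
    "group": rng.choice(["ASD", "TD"], n),
})
# Toy generative assumption: high load reduces detection for TD only.
p = np.where((df.load == "high") & (df.group == "TD"), 0.4, 0.8)
df["aware"] = rng.binomial(1, p)  # 1 = critical stimulus detected

# The load x group interaction tests whether the load cost differs by group.
fit = smf.logit("aware ~ load * group", data=df).fit()
print(fit.summary())
```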

    Relationship between speech-evoked neural responses and perception of speech in noise in older adults

    Speech-in-noise (SPIN) perception involves neural encoding of temporal acoustic cues. These cues include temporal fine structure (TFS) and envelopes that modulate at syllable (Slow-rate ENV) and fundamental-frequency (F0-rate ENV) rates. Here, the relationship between speech-evoked neural responses to these cues and SPIN perception was investigated in older adults. Theta-band phase-locking values (PLV), which reflect cortical sensitivity to Slow-rate ENV, and peripheral/brainstem frequency-following responses phase-locked to F0-rate ENV (FFR_ENV_F0) and to TFS (FFR_TFS) were measured from scalp-EEG responses to a repeated speech syllable in steady-state speech-shaped noise (SpN) and 16-speaker babble noise (BbN). The results showed that: (1) SPIN performance and PLV were significantly higher under SpN than BbN, implying that differential cortical encoding may be a neural mechanism underlying SPIN performance that varies with noise type; (2) PLV and FFR_TFS at resolved harmonics were significantly related to good SPIN performance, supporting the importance of phase-locked neural encoding of Slow-rate ENV and of the TFS of resolved harmonics during SPIN perception; (3) FFR_ENV_F0 was not associated with SPIN performance until audiometric threshold was controlled for, indicating that hearing loss should be carefully controlled for when studying the role of neural encoding of F0-rate ENV. Implications are drawn with respect to the fitting of auditory prostheses.
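
    For readers unfamiliar with the phase-locking value: it is the length of the mean unit phase vector across trials, ranging from 0 (no inter-trial phase consistency) to 1 (perfect consistency). A generic sketch follows, assuming single-channel epochs and a Hilbert-based phase estimate; the filter order, band edges and averaging choices are illustrative, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(trials, fs, band):
    """Inter-trial PLV of band-limited EEG.

    trials: (n_trials, n_samples) single-channel epochs
    fs:     sampling rate in Hz
    band:   (low, high) band edges in Hz, e.g. (4, 8) for theta
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)
    phase = np.angle(hilbert(filtered, axis=1))
    # Length of the mean unit phase vector across trials, per sample:
    plv = np.abs(np.exp(1j * phase).mean(axis=0))
    return plv.mean()  # summarise over time

# e.g. theta-band PLV indexing cortical locking to the slow envelope:
# plv_theta = phase_locking_value(eeg_trials, fs=1000, band=(4, 8))
```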

    Trait Anxiety Effects on Late Phase Threatening Speech Processing: Evidence from Electroencephalography

    The effects of threatening stimuli, including threatening language, on trait anxiety have been widely studied. However, whether anxiety levels have a direct effect on language processing has not been explored as consistently. The present study focuses on event-related potential (ERP) patterns obtained from electroencephalographic (EEG) recordings of participants' (n = 36) brain activity while they performed a dichotic listening task. Participants' anxiety level was measured with a behavioural inhibition system (BIS) scale. Participants then listened to dichotically paired sentences, one neutral and the other threatening, and indicated at which ear they heard the threatening stimulus. Threatening sentences expressed threat semantically only, prosodically only, or both combined (congruent threat). ERPs showed a late positivity, interpreted as a late positive complex (LPC). Results from Bayesian hierarchical models provided strong support for an association between LPC amplitude and BIS score, which we interpret as an effect of trait anxiety on deliberation processes. We discuss two possible interpretations. On the one hand, verbal repetitive thinking, as associated with anxious rumination and worry, could be the mechanism disrupting late-phase deliberation processes; instantiated as inner speech, it might be the vehicle of anxiety-related reappraisal and/or rehearsal. On the other hand, increased BIS could simply affect an extended evaluation stage, as proposed by multistep models, possibly due to over-engagement with threat or to task-related effects.
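
    The abstract does not specify the model structure, but a Bayesian hierarchical regression of single-trial LPC amplitude on BIS score, with by-participant intercepts, might look like the following PyMC sketch. The simulated data, priors and slope are assumptions for illustration only, not the authors' model.

```python
import numpy as np
import pymc as pm

# Hypothetical single-trial data: 36 participants x 40 trials.
rng = np.random.default_rng(1)
n_subj, n_trials = 36, 40
subj = np.repeat(np.arange(n_subj), n_trials)        # participant per trial
bis = rng.normal(size=n_subj)                        # standardised BIS scores
lpc = 0.3 * bis[subj] + rng.normal(size=subj.size)   # toy LPC amplitudes

with pm.Model():
    beta = pm.Normal("beta", 0.0, 1.0)            # BIS -> LPC slope
    u = pm.Normal("u", 0.0, 1.0, shape=n_subj)    # by-participant intercepts
    sigma = pm.HalfNormal("sigma", 1.0)
    mu = u[subj] + beta * bis[subj]
    pm.Normal("obs", mu, sigma, observed=lpc)
    trace = pm.sample(1000, tune=1000, chains=2)
```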

    The effect of age and hearing loss on partner-directed gaze in a communicative task

    The study examined the partner-directed gaze patterns of older and younger talkers in a task (DiapixUK) that involved two people (a lead talker and a follower) engaging in spontaneous dialogue. The aims were (1) to determine whether older adults engage less in partner-directed gaze than younger adults, by measuring mean gaze frequency and mean total gaze duration; and (2) to examine the effect that mild hearing loss may have on older adults' partner-directed gaze. These were tested in various communication conditions: a no-barrier condition; a BAB2 condition, in which the lead talker and the follower spoke and heard each other in multitalker babble noise; and two barrier conditions in which the lead talker could hear the follower clearly but the follower could not hear the lead talker very clearly (i.e., the lead talker's voice was degraded by babble (BAB1) or by a hearing loss simulation (HLS)). Fifty-seven single-sex pairs (19 older adults with mild hearing loss, 17 older adults with normal hearing and 21 younger adults) participated in the study. We found that older adults with normal hearing produced fewer partner-directed gazes (and gazed less overall) than either the older adults with hearing loss or the younger adults in the BAB1 and HLS conditions. We propose that this may be due to a decline in older adults' attention to cues signaling how well a conversation is progressing. Older adults with hearing loss, however, may attend more to visual cues because they give these greater weighting for understanding speech.
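
    The two gaze measures named here reduce to simple summaries over annotated gaze events. A sketch, assuming gaze is coded as (onset, offset) intervals in seconds; the per-minute normalisation is an assumption rather than the paper's definition.

```python
def gaze_measures(gaze_intervals, task_duration_s):
    """Partner-directed gaze summaries from annotated intervals.

    gaze_intervals: list of (onset_s, offset_s) tuples, one per gaze event
    Returns (gaze frequency per minute, total gaze duration in seconds).
    """
    n = len(gaze_intervals)
    total = sum(off - on for on, off in gaze_intervals)
    return n / (task_duration_s / 60.0), total

# Hypothetical annotation of one trial:
events = [(2.1, 3.4), (10.0, 11.2), (45.3, 49.0)]
print(gaze_measures(events, task_duration_s=300))
```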

    Phonetic categorisation and cue weighting in adolescents with Specific Language Impairment (SLI)

    This study investigates phonetic categorisation and cue weighting in adolescents and young adults with Specific Language Impairment (SLI). We manipulated two acoustic cues, vowel duration and F1 offset frequency, that signal word-final stop-consonant voicing ([t] vs [d]) in English. Ten individuals with SLI (14.0–21.4 years), 10 age-matched controls (CA; 14.6–21.9 years) and 10 non-matched adult controls (23.3–36.0 years) labelled synthetic CVC non-words in an identification task. The results showed that the adolescents and young adults with SLI were less consistent than controls in identifying good category representatives. The group with SLI also assigned less weight to vowel duration than the adult controls. However, no direct relationship was found between phonetic categorisation, cue weighting and language skills. These findings indicate that some individuals with SLI have speech perception deficits, but that these are not necessarily associated with oral language skills.
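
    Cue weighting in identification tasks is commonly estimated by regressing listeners' responses on the standardised acoustic cues, with coefficient magnitudes read as relative cue weights. A sketch with simulated responses; the continuum ranges and the toy listener are assumptions, not the study's stimuli.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = np.column_stack([
    rng.uniform(100, 300, 500),   # vowel duration (ms) continuum
    rng.uniform(200, 500, 500),   # F1 offset frequency (Hz) continuum
])
# Toy listener: longer vowels and lower F1 offsets favour /d/ responses.
logit = 0.03 * (X[:, 0] - 200) - 0.01 * (X[:, 1] - 350)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1 = "d" response

# Standardise cues so coefficient magnitudes are comparable as weights.
Xz = StandardScaler().fit_transform(X)
clf = LogisticRegression().fit(Xz, y)
dur_w, f1_w = clf.coef_[0]
print(f"duration weight: {dur_w:.2f}, F1-offset weight: {f1_w:.2f}")
```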

    More than words: word predictability, prosody, gesture and mouth movements in natural language comprehension

    The ecology of human language is face-to-face interaction, comprising cues such as prosody, co-speech gestures and mouth movements. Yet this multimodal context is usually stripped away in experiments, as dominant paradigms focus on linguistic processing only. In two studies we presented participants with video clips of an actress producing naturalistic passages while recording their electroencephalogram. We quantified multimodal cues (prosody, gestures, mouth movements) and measured their effect on a well-established electroencephalographic marker of processing load in comprehension (the N400). We found that brain responses to words were affected by the informativeness of co-occurring multimodal cues, indicating that comprehension relies on both linguistic and non-linguistic cues. Moreover, they were affected by interactions between the multimodal cues, indicating that the impact of each cue changes dynamically with the informativeness of the other cues. These results show that multimodal cues are integral to comprehension; our theories must therefore move beyond a limited focus on speech and linguistic processing.
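
    An analysis in this spirit regresses word-level N400 amplitude on the quantified cues and their interactions, with participants as a grouping factor. The mixed-model sketch below uses simulated data; the predictor names and random-effects structure are placeholders, not the authors' model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000  # hypothetical word tokens

df = pd.DataFrame({
    "n400":      rng.normal(size=n),   # word-level N400 amplitude
    "surprisal": rng.normal(size=n),   # word predictability
    "prosody":   rng.normal(size=n),   # prosodic informativeness
    "gesture":   rng.normal(size=n),   # gesture informativeness
    "mouth":     rng.normal(size=n),   # mouth-movement informativeness
    "subject":   rng.integers(0, 30, n),
})

# Interactions let each cue's effect change with the other cues;
# by-participant random intercepts via groups="subject".
m = smf.mixedlm("n400 ~ surprisal * (prosody + gesture + mouth)",
                df, groups="subject").fit()
print(m.summary())
```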